Cryptography is hard to get right, and it’s hard to measure the security of an implementation.
Engineering is building with accountability for what you build.
Our checks need to be evidenced and reproducible by any third party.
The software industry has lots of experience in proving the safety of programs, e.g. in aircraft.
Proving safety is relative to a specification and to a set of clearly stated assumptions (e.g. that the hardware is working correctly).
Usually the only thing we care about proving is that the software won’t crash/break; there is typically a human in the loop who can correct any incorrect outputs.
Example: UK air traffic control (ATC) software written in formally verified Ada, and a related crash.
We want our proofs to be checked by automated reasoning tools, because the tools themselves can be proven correct, while a human reviewer can’t be.
A type system is an example of a reasoning tool. The big goal of such a system is that if something goes wrong, we crash immediately, so we never execute undefined/bad behaviour.
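A minimal Rust sketch of the crash-instead-of-undefined-behaviour idea (the table and `lookup` helper are purely illustrative):

```rust
// A type-checked, memory-safe language turns an out-of-range access into an
// immediate, controlled crash instead of silently reading adjacent memory.
fn lookup(table: &[u32], i: usize) -> u32 {
    table[i] // panics (controlled crash) if i is out of bounds
}

fn main() {
    let round_constants = [0x0123_4567_u32, 0x89ab_cdef, 0xdead_beef];
    println!("{:#010x}", lookup(&round_constants, 1)); // in bounds: fine
    // lookup(&round_constants, 10); // would panic at once, never undefined behaviour
}
```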
An interactive proof assistant checks the validity/soundness of a mathematical proof using a very small Trusted Computing Base (TCB).
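As a toy illustration (not from the lecture), a proof like the following is accepted only if it passes the proof assistant’s small kernel, which is the only component we have to trust:

```lean
-- Lean 4: the kernel re-checks this proof term; the tactics and automation
-- that helped build it sit outside the trusted computing base.
theorem and_comm' (p q : Prop) : p ∧ q → q ∧ p :=
  fun ⟨hp, hq⟩ => ⟨hq, hp⟩
```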
In cryptography, the analogue of a TCB is the set of standards provided by governments or standards agencies, which we can build on and trust to be safe.
In cryptography, abstraction can be dangerous because you don’t have control over the implementation of any abstraction layers you use.
For example, TLS abstracts over the network layer and assumes that the network layer either transmits a message in whole or not at all. In cases where this assumption doesn’t hold, attacks are possible.
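A hedged sketch of the underlying issue (toy length-prefixed records, not the real TLS record layer): a receiver must not treat whatever a single network read returned as a complete message.

```rust
// Toy record format: [2-byte big-endian length | payload]. A careful parser
// refuses to process a record until it has arrived in full.
fn try_parse_record(buf: &[u8]) -> Option<(&[u8], usize)> {
    if buf.len() < 2 {
        return None; // length prefix not yet available
    }
    let len = u16::from_be_bytes([buf[0], buf[1]]) as usize;
    if buf.len() < 2 + len {
        return None; // record still fragmented across reads: wait, don't guess
    }
    Some((&buf[2..2 + len], 2 + len)) // (payload, bytes consumed)
}

fn main() {
    let record = [0x00, 0x03, b'a', b'b', b'c'];
    assert!(try_parse_record(&record[..3]).is_none()); // partial delivery: not processed
    assert_eq!(try_parse_record(&record).unwrap().0, b"abc");
}
```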
There are standards for certifying how high-assurance a system is (e.g. Common Criteria EAL 4+).
We want to get high-assurance guarantees on complex composed systems despite abstractions.
![[Towards High-Assurance Cryptographic Engineering 2025-03-17 10.03.18.excalidraw]]
Pick a leakage model:
![[Towards High-Assurance Cryptographic Engineering 2025-03-17 11.18.26.excalidraw]]
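A hedged sketch of what a leakage model can look like (the `Leak` type and instrumented lookup are illustrative): the adversary is assumed to observe control flow and memory addresses, but not the values computed.

```rust
// One common choice of leakage model: each executed operation appends an
// event that the adversary is assumed to see.
#[derive(Debug, PartialEq, Eq, Clone)]
#[allow(dead_code)]
enum Leak {
    Branch(bool),   // which way a conditional branch went
    MemAddr(usize), // which memory address was accessed
}

type Trace = Vec<Leak>;

// An instrumented table lookup: under this model it leaks the address,
// which here depends on the secret index.
fn leaky_lookup(table: &[u8], secret_index: usize, trace: &mut Trace) -> u8 {
    trace.push(Leak::MemAddr(secret_index));
    table[secret_index]
}

fn main() {
    let sbox = [0x63u8, 0x7c, 0x77, 0x7b];
    let mut trace = Trace::new();
    let _ = leaky_lookup(&sbox, 2, &mut trace);
    println!("{:?}", trace); // [MemAddr(2)] is visible to the adversary
}
```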
Non-interference: prove that given two secrets operating on the same public value, the leakage is the same. E.g. given two plaintexts to encrypt, the adversary can’t tell from the leakage which one we operated on.
![[Towards High-Assurance Cryptographic Engineering 2025-03-17 11.21.17.excalidraw]]
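A hedged Rust sketch of that statement for a single operation (the function names are illustrative): under a branch/address leakage model, a masked select produces identical leakage whichever secret bit it is given, while a branching select does not.

```rust
// Branching on the secret leaks which branch was taken, so two different
// secret bits give two different leakage traces.
fn select_branchy(secret_bit: bool, a: u64, b: u64) -> u64 {
    if secret_bit { a } else { b } // leaks Branch(secret_bit)
}

// The masked version runs the same instructions for either secret bit,
// so the leakage is the same for both: this is the non-interference goal.
fn select_constant_time(secret_bit: u64, a: u64, b: u64) -> u64 {
    debug_assert!(secret_bit <= 1);
    let mask = secret_bit.wrapping_neg(); // 0x000...0 or 0xfff...f
    (a & mask) | (b & !mask)
}

fn main() {
    assert_eq!(select_constant_time(1, 7, 9), 7);
    assert_eq!(select_constant_time(0, 7, 9), 9);
    let _ = select_branchy(true, 7, 9);
}
```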
Simulation: prove that the leakage is indistinguishable from the leakage of a simulator that doesn’t use the secret at all.
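A hedged sketch of the simulation statement (the toy `real_leakage`/`simulated_leakage` functions are made up): the simulator sees only public values, yet reproduces the leakage of the real operation, so the leakage can carry no information about the secret.

```rust
// Toy "real" operation: its leakage (addresses touched) depends only on the
// public block length, never on the key.
fn real_leakage(block_len: usize, _secret_key: &[u8]) -> Vec<usize> {
    (0..block_len).collect()
}

// The simulator is given only the public value and must output the same trace.
fn simulated_leakage(block_len: usize) -> Vec<usize> {
    (0..block_len).collect()
}

fn main() {
    let key = [0u8; 16];
    assert_eq!(real_leakage(4, &key), simulated_leakage(4));
}
```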
Simulation is ideal if we are composing modules, because we can just compose the simulations.
Simulation implies non-interference, because if you have an adversary that breaks non-interference, you can turn it into one that breaks simulation by just dropping one of the secrets.
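The forward direction can be stated formally; a minimal Lean sketch (types and names are illustrative, not from the lecture):

```lean
-- If a simulator S reproduces the leakage from the public input alone,
-- then the leakage is identical for any two secrets on the same public
-- input, i.e. simulation implies non-interference.
theorem sim_implies_ni {Pub Sec Leak : Type}
    (L : Pub → Sec → Leak) (S : Pub → Leak)
    (hsim : ∀ pub s, L pub s = S pub) :
    ∀ pub s₁ s₂, L pub s₁ = L pub s₂ := by
  intro pub s₁ s₂
  rw [hsim pub s₁, hsim pub s₂]
```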
“Assembly in the head”: you have high-level constructs like arrays, variables, etc.
Each line of source code compiles down to at most one assembly instruction, so you can tell what the compiler will emit.
The compiler also does useful things like guaranteeing a fixed memory footprint, zeroisation, etc.
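For illustration only (Rust gives none of these guarantees, and the function is a made-up example), the programming style looks roughly like straight-line code over named variables where each statement maps to about one machine operation:

```rust
// Straight-line, one-operation-per-statement style: easy to predict what
// the compiler will emit for each line.
fn mix_step(a: u32, b: u32) -> u32 {
    let t = a.wrapping_add(b); // one add
    let t = t ^ b;             // one xor
    t.rotate_left(7)           // one rotate
}

fn main() {
    println!("{:#010x}", mix_step(0x1234_5678, 0x9abc_def0));
}
```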